VRSpeech is a simple application that illustrates ways of using the QuickTime VR API. VRSpeech uses Apple's Speech Recognition Manager (SRM) to allow voice navigation of a panorama or object node. For instance, the user can navigate within a node by uttering commands like "pan left 90 degrees," "zoom in," "zoom out," and the like. The complete vocabulary is defined in the file LMSpeech.h.
The Speech Recognition Manager is available on Developer CDs and on the Web at http://www.speech.apple.com. In addition, Issue 27 of develop magazine (September 1996) contains several good articles that explain how to integrate the SRM with your application.
VRSpeech is built on top of VRShell: it uses VRShell's MacFramework.c file directly and replaces the other VRShell source files. The file TestFunctions.c contains the bulk of the speech recognition processing.
The file LMSpeech.rsrc contains a resource that holds the language model of the words and phrases we want to listen for. That resource was built with the SRLanguageModeler utility developed by the Speech Recognition group. If you wish, you can modify the language model (defined in the file LMSpeech.h) and generate a new resource.
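The SRM also lets you build a language model in code instead of loading it from a resource. The following sketch shows the general shape of that setup; it is not taken from VRSpeech, error recovery is omitted, and the phrase strings, refCon values, and the SetUpListening function name are illustrative. (Classic Mac OS Toolbox code; it will not compile outside that environment.)

```c
#include <SpeechRecognition.h>

enum { kRefConZoomIn = 1, kRefConZoomOut = 2 };

/* Open the recognition system, create a recognizer, attach a small
   language model, and start listening. Recognition results later
   arrive as kAESpeechDone Apple events. */
static OSErr SetUpListening(SRRecognitionSystem *outSystem,
                            SRRecognizer *outRecognizer)
{
    SRLanguageModel model;
    OSErr err;

    err = SROpenRecognitionSystem(outSystem, kSRDefaultRecognitionSystemID);
    if (err == noErr)
        err = SRNewRecognizer(*outSystem, outRecognizer, kSRDefaultSpeechSource);
    if (err == noErr)
        err = SRNewLanguageModel(*outSystem, &model, "<commands>", 10);
    if (err == noErr)
        err = SRAddText(model, "zoom in", 7, kRefConZoomIn);
    if (err == noErr)
        err = SRAddText(model, "zoom out", 8, kRefConZoomOut);
    if (err == noErr)
        err = SRSetLanguageModel(*outRecognizer, model);
    if (err == noErr)
        err = SRStartListening(*outRecognizer);
    return err;
}
```

Using a prebuilt resource, as VRSpeech does with LMSpeech.rsrc, saves this setup code and lets you revise the vocabulary with SRLanguageModeler without recompiling.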